
Set object lock for volumes for cephfs encryption #4697

Merged
merged 1 commit into ceph:devel on Jul 11, 2024

Conversation

@Sunnatillo (Contributor) commented Jun 27, 2024

Describe what this PR does

Provide some context for the reviewer

Is there anything that requires special attention

Do you have any questions?

Is the change backward compatible?

Are there concerns around backward compatibility?

Provide any external context for the change, if any.

For example:

  • Kubernetes links that explain why the change is required
  • CSI spec related changes/catch-up that necessitate this patch
  • golang related practices that necessitate this change

Related issues

Mention any GitHub issues relevant to this PR. Adding a line like the ones below
will help auto-close the issue once the PR is merged.

Closes: #4688
Fixes: #4654

Future concerns

List items that are not part of the PR and do not impact its
functionality, but are work items that can be taken up subsequently.

Checklist:

  • Commit Message Formatting: Commit titles and messages follow guidelines in the developer guide.
  • Reviewed the developer guide on Submitting a Pull Request.
  • Pending release notes updated with breaking and/or notable changes for the next major release.
  • Documentation has been updated, if necessary.
  • Unit tests have been added, if necessary.
  • Integration tests have been added, if necessary.

Available bot commands

These commands are normally not required, but in case of issues, leave any of
the following bot commands in an otherwise empty comment in this PR:

  • /retest ci/centos/<job-name>: retest the <job-name> after unrelated
    failure (please report the failure too!)

@nixpanic (Member) left a comment

Just a nit to improve the readability of the code a little; it looks quite good already.

internal/cephfs/nodeserver.go (outdated, resolved)
@Sunnatillo changed the title from "WIP:Set object lock for volumes for cephfs encryption" to "WIP: Set object lock for volumes for cephfs encryption" on Jul 1, 2024
internal/cephfs/nodeserver.go (resolved)
}
defer ioctx.Destroy()

res, err := ioctx.LockExclusive(volOptions.VolID, lockName, string(lockCookie), lockDesc, lockDuration, &flags)
Collaborator

Suggested change
res, err := ioctx.LockExclusive(volOptions.VolID, lockName, string(lockCookie), lockDesc, lockDuration, &flags)
res, err := ioctx.LockExclusive(volOptions.VolID, lockName, lockCookie, lockDesc, lockDuration, &flags)

lockCookie is already a string

internal/cephfs/nodeserver.go (outdated, resolved)

return nil
_, err = ioctx.Unlock(string(volID), lockName, lockCookie)
Collaborator

Won't we run into a problem if we try to unlock while someone else is already holding the lock, in the case of RWX with multiple nodes?

log.DebugLog(ctx, "Lock successfully created for the volume ID %s", volID)

if res == 0 {
log.DebugLog(ctx, "cephfs: unlocking fscrypt on volume %q path %s", volID, stagingTargetPath)
return fscrypt.Unlock(ctx, volOptions.Encryption, stagingTargetPath, string(volID))
Collaborator

Don't we need to release the lock once fscrypt.Unlock is done?

}
}()

return fmt.Errorf("There is already one file system with name %s", string(volID))
Collaborator

Need to update the error message

@Madhu-1 (Collaborator) commented Jul 3, 2024

@Sunnatillo can you please check the CI failures?

@Sunnatillo changed the title from "WIP: Set object lock for volumes for cephfs encryption" to "Set object lock for volumes for cephfs encryption" on Jul 8, 2024
@Sunnatillo marked this pull request as ready for review on July 8, 2024 07:40
internal/cephfs/nodeserver.go (outdated, resolved)
internal/cephfs/nodeserver.go (outdated, resolved)
@Sunnatillo force-pushed the set-mutex-lock/sunnat branch 2 times, most recently from cf7fa8d to 75eab26 on July 10, 2024 14:51
@nixpanic added the component/cephfs (Issues related to CephFS) label on Jul 11, 2024
Makefile (outdated, resolved)
@Sunnatillo force-pushed the set-mutex-lock/sunnat branch 2 times, most recently from 2202913 to 80b84a8 on July 11, 2024 11:08
nixpanic previously approved these changes Jul 11, 2024
@nixpanic requested a review from Madhu-1 on July 11, 2024 12:39
Madhu-1 previously approved these changes Jul 11, 2024
@Madhu-1 (Collaborator) left a comment

Thank you @Sunnatillo 🎉

@nixpanic (Member)

@Mergifyio rebase

mergify bot (Contributor) commented Jul 11, 2024

rebase

❌ Pull request can't be updated with latest base branch changes

Mergify needs the author permission to update the base branch of the pull request.
@Nordix needs to authorize modification on its head branch.

The way the fscrypt client handles metadata and policy creation
causes errors when multiple instances start simultaneously.
This commit adds a lock to ensure the initial setup
completes correctly, preventing race conditions and
mismatches.

Signed-off-by: Sunnatillo <sunnat.samadov@est.tech>
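For context, a rough sketch of the approach the commit message describes, using go-ceph's rados IOContext API. The lock name, cookie, duration, and the setupEncryptionSerialized/setup helpers are illustrative assumptions, not the actual ceph-csi code:

package lockexample

import (
	"fmt"
	"time"

	"github.com/ceph/go-ceph/rados"
)

const (
	lockName     = "csi-cephfs-fscrypt-lock" // illustrative lock name
	lockCookie   = "csi-cephfs-fscrypt"      // illustrative cookie
	lockDesc     = "serialize fscrypt initial setup"
	lockDuration = 3 * time.Minute // safety net: the lock expires if it is never released
)

// setupEncryptionSerialized takes an exclusive rados lock on the volume's
// object ID, runs the fscrypt initial setup, and then releases the lock, so
// only one node performs metadata/policy creation at a time.
func setupEncryptionSerialized(ioctx *rados.IOContext, volID string, setup func() error) error {
	var flags byte
	ret, err := ioctx.LockExclusive(volID, lockName, lockCookie, lockDesc, lockDuration, &flags)
	if err != nil {
		return fmt.Errorf("failed to lock volume ID %s: %w", volID, err)
	}
	if ret != 0 {
		// Typically another client/cookie pair already holds the lock.
		return fmt.Errorf("lock on volume ID %s is held by another client (ret=%d)", volID, ret)
	}

	// NOTE: if setup() fails, the lock is only released when lockDuration
	// expires; the review discussion below suggests a defer-based release.
	if err := setup(); err != nil { // e.g. fscrypt metadata/policy creation and unlock
		return err
	}

	if _, err := ioctx.Unlock(volID, lockName, lockCookie); err != nil {
		return fmt.Errorf("failed to unlock volume ID %s: %w", volID, err)
	}

	return nil
}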
@nixpanic requested a review from Madhu-1 on July 11, 2024 13:13
@ceph-csi-bot (Collaborator) posted retest commands:

/test ci/centos/k8s-e2e-external-storage/1.30
/test ci/centos/k8s-e2e-external-storage/1.29
/test ci/centos/k8s-e2e-external-storage/1.27
/test ci/centos/mini-e2e-helm/k8s-1.30
/test ci/centos/mini-e2e-helm/k8s-1.29
/test ci/centos/mini-e2e-helm/k8s-1.27
/test ci/centos/mini-e2e/k8s-1.30
/test ci/centos/mini-e2e/k8s-1.29
/test ci/centos/mini-e2e/k8s-1.27
/test ci/centos/k8s-e2e-external-storage/1.28
/test ci/centos/mini-e2e-helm/k8s-1.28
/test ci/centos/mini-e2e/k8s-1.28
/test ci/centos/upgrade-tests-cephfs
/test ci/centos/upgrade-tests-rbd

@ceph-csi-bot removed the ok-to-test (Label to trigger E2E tests) label on Jul 11, 2024
@iPraveenParihar (Contributor)

@nixpanic, I see an fscrypt.Unlock call for RBD fscrypt. Do we need to do the same here as well? Do we need something like the following:

if volOptions.isFileEncrypted() {
	log.DebugLog(ctx, "rbd fscrypt: trying to unlock filesystem on %s image %s", stagingTargetPath, volOptions.VolID)
	err = fscrypt.Unlock(ctx, volOptions.fileEncryption, stagingTargetPath, volOptions.VolID)
	if err != nil {
		return transaction, fmt.Errorf("file system encryption unlock in %s image %s failed: %w",
			stagingTargetPath, volOptions.VolID, err)
	}
}

@nixpanic (Member)

@nixpanic, I see fscrypt.Unlock call for RBD fscrypt. Do we need to do the same here as well?

No, for RBD that is not needed. RBD with a filesystem is never attached to more than one worker node at the same time, so initialization/unlocking of the filesystem always happens on a single node. It is different with ReadWriteMany RBD volumes in block mode with encryption; the LUKS calls there would benefit from similar locking.

@iPraveenParihar (Contributor) left a comment

Thanks!

mergify bot merged commit e7762ac into ceph:devel on Jul 11, 2024
40 checks passed
@NymanRobin (Contributor) left a comment

Thanks a lot for fixing this 🎉

But I have a slight concern about the scenario where fscrypt.Unlock fails. What do you think?


log.DebugLog(ctx, "cephfs: unlocking fscrypt on volume %q path %s", volID, stagingTargetPath)
err = fscrypt.Unlock(ctx, volOptions.Encryption, stagingTargetPath, string(volID))
if err != nil {
Contributor

Won't this leave a hanging lock in case fscrypt fails to unlock the filesystem?

I think the "ceph unlock functionality" could be moved to a separate function and pushed onto the defer stack once the exclusive lock succeeds?

Contributor Author

What about the lockDuration variable passed to LockExclusive? When the duration expires it will automatically unlock?

Contributor

Yes, I understand that. But in my opinion it causes unnecessary delays that could easily be avoided. My point is that an operation that would usually take seconds starts to take minutes due to one failure.


Collaborator

Yes, it makes sense to avoid the unwanted delay in this case. As suggested, ioctx.Unlock can be deferred right after a successful Lock.
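A minimal sketch of that suggestion, assuming the same go-ceph API and reusing the illustrative lock constants from the earlier sketch; withVolumeLock is a hypothetical helper, not the actual ceph-csi change:

package lockexample

import (
	"fmt"

	"github.com/ceph/go-ceph/rados"
)

// withVolumeLock runs do() under the exclusive volume lock and releases the
// lock on every return path, including when fscrypt.Unlock inside do() fails.
func withVolumeLock(ioctx *rados.IOContext, volID string, do func() error) error {
	var flags byte
	ret, err := ioctx.LockExclusive(volID, lockName, lockCookie, lockDesc, lockDuration, &flags)
	if err != nil {
		return fmt.Errorf("failed to lock volume ID %s: %w", volID, err)
	}
	if ret != 0 {
		return fmt.Errorf("lock on volume ID %s is held by another client (ret=%d)", volID, ret)
	}
	// Registered immediately after a successful lock, so a failure in do()
	// cannot leave a hanging lock until lockDuration expires.
	defer ioctx.Unlock(volID, lockName, lockCookie) // best-effort release; errors are non-fatal here

	return do()
}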

Contributor

I created an issue for this so it is not forgotten: #4715

@z2000l commented Jul 23, 2024

Is there a config needed to use the lock? We built an image from the devel branch (https://github.com/ceph/ceph-csi/tree/devel) and tried it out. However, pods failed to come up and we got the following error:

Warning FailedMount 15s (x6 over 31s) kubelet MountVolume.MountDevice failed for volume "pvc-4642810a-7f00-4ae0-8583-ddbcee6ed314" : rpc error: code = Internal desc = Failed to lock volume ID 0001-0009-rook-ceph-00000000000000ad-d71d2949-db34-434e-8008-5497dad5c70b: rados: ret=-1, Operation not permitted

@NymanRobin (Contributor)

I have tested this and encountered the same error as @z2000l. @Sunnatillo, could you please document any additional settings required or anything we might be overlooking?

}
defer ioctx.Destroy()

res, err := ioctx.LockExclusive(volOptions.VolID, lockName, lockCookie, lockDesc, lockDuration, &flags)
@NymanRobin (Contributor) commented Jul 24, 2024

Reading the code in go-ceph, it seems that the object we want to lock should already exist and that this call won't create a new object? Are we sure there exists an object in rados corresponding to VolID? My initial assumption is no, but I am not 100% confident without further investigation.

// LockExclusive takes an exclusive lock on an object.
func (ioctx *IOContext) LockExclusive(oid, name, cookie, desc string, duration time.Duration, flags *byte) (int, error) {

Edit: The object seems not to be needed

@NymanRobin (Contributor) commented Jul 24, 2024

Continuing my dig here, I added the following logging before the LockExclusive call:

_, err = ioctx.Stat(volOptions.VolID)
if err != nil {
  log.ErrorLog(ctx, "The error when object is not here: %s", err)
  log.ErrorLog(ctx, "Object %s does not exist", volOptions.VolID)
}

I get these logs:

E0724 09:03:43.715019       1 nodeserver.go:156] ID: 142 Req-ID: 0001-0009-rook-ceph-0000000000000001-489ccf33-13a5-40fc-8460-7dd866bc44de The error when object is not here: rados: ret=-2, No such file or directory
E0724 09:03:43.715051       1 nodeserver.go:157] ID: 142 Req-ID: 0001-0009-rook-ceph-0000000000000001-489ccf33-13a5-40fc-8460-7dd866bc44de Object csi-vol-489ccf33-13a5-40fc-8460-7dd866bc44de does not exist
E0724 09:03:43.715477       1 utils.go:240] ID: 142 Req-ID: 0001-0009-rook-ceph-0000000000000001-489ccf33-13a5-40fc-8460-7dd866bc44de GRPC error: rpc error: code = Internal desc = Failed to lock volume ID 0001-0009-rook-ceph-0000000000000001-489ccf33-13a5-40fc-8460-7dd866bc44de: rados: ret=-1, Operation not permitted

This makes me highly suspicious... I do wonder, however, why Ceph does not return -2 when a lock is requested on an object that does not exist? Maybe they have not thought of this scenario and it could be reported? I guess the next step in this journey to the core would be to create the object first and see if the lock can then be acquired 😄

Edit: the actual problem is in the comments below; the object does not seem to be needed.

Contributor

Now I know at least where the -1 comes from, and it is correct that the permissions are not there!
After adding the following caps to the authentication I can acquire the lock:

ceph auth caps client.csi-cephfs-node \
  mon 'allow *' \
  mgr 'allow *' \
  osd 'allow *' \
  mds 'allow *'

But I think what is actually needed is the following:
ceph auth caps client.csi-cephfs-node rados 'allow rw'

Contributor

Okay, to conclude: it seems the lock can be created without the object existing, so that is all good and my ramblings from above can be ignored!

Except that some additional permissions are needed for client.csi-cephfs-node to be able to create the lock.

However, it is currently extremely slow for me: bringing up five deployments with 3 replicas each, after 5 minutes I have only one replica of each running. The error reported is the following:

  Warning  FailedMount             3m32s (x2 over 5m47s)  kubelet                  Unable to attach or mount volumes: unmounted volumes=[my-volume], unattached volumes=[my-volume kube-api-access-8cmvd]: timed out waiting for the condition
  Warning  FailedMount             81s (x11 over 7m35s)   kubelet                  MountVolume.MountDevice failed for volume "pvc-6afa784b-dc4f-44fb-bed9-8c6fe985c7e8" : rpc error: code = Internal desc = Lock is already held by another client and cookie pair for 0001-0009-rook-ceph-0000000000000001-c7f5d8f9-f9ca-4af8-8457-7ffd1bfb4437 volume
  Warning  FailedMount             74s                    kubelet                  Unable to attach or mount volumes: unmounted volumes=[my-volume], unattached volumes=[kube-api-access-8cmvd my-volume]: timed out waiting for the condition

This is the status of the pods after 8m46s:

$ kubectl get pods -n default 
NAME                            READY   STATUS              RESTARTS   AGE
deploy-pod-1-7f7789fd5f-92x4f   1/1     Running             0          8m46s
deploy-pod-1-7f7789fd5f-99qmp   1/1     Running             0          8m46s
deploy-pod-1-7f7789fd5f-dnr79   0/1     ContainerCreating   0          8m46s
deploy-pod-2-58c654874c-j2mvp   0/1     ContainerCreating   0          8m46s
deploy-pod-2-58c654874c-jh2wp   1/1     Running             0          8m46s
deploy-pod-2-58c654874c-svjws   1/1     Running             0          8m46s
deploy-pod-3-6674fc79f-rbtk2    1/1     Running             0          8m46s
deploy-pod-3-6674fc79f-w452p    1/1     Running             0          8m46s
deploy-pod-3-6674fc79f-w8wqb    0/1     ContainerCreating   0          8m46s
deploy-pod-4-6c958fb787-cqxlb   1/1     Running             0          8m46s
deploy-pod-4-6c958fb787-kvgbl   0/1     ContainerCreating   0          8m46s
deploy-pod-4-6c958fb787-q794c   1/1     Running             0          8m46s
deploy-pod-5-86b9bb47-5vmrq     0/1     ContainerCreating   0          8m45s
deploy-pod-5-86b9bb47-dd6v7     1/1     Running             0          8m45s
deploy-pod-5-86b9bb47-g5jvg     0/1     ContainerCreating   0          8m45s

After a whopping 13 minutes all the pods were finally running!

Should we actually poll the lock when trying to acquire it? I think that might make it faster.
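A rough sketch of the polling idea, again reusing the illustrative constants from the earlier sketches; the interval and the lockWithRetry name are assumptions, not the ceph-csi implementation:

package lockexample

import (
	"context"
	"fmt"
	"time"

	"github.com/ceph/go-ceph/rados"
)

// lockWithRetry polls LockExclusive until the lock is acquired or the context
// is cancelled, instead of failing immediately and relying on kubelet retries.
func lockWithRetry(ctx context.Context, ioctx *rados.IOContext, volID string) error {
	var flags byte
	ticker := time.NewTicker(2 * time.Second) // poll interval (illustrative)
	defer ticker.Stop()

	for {
		ret, err := ioctx.LockExclusive(volID, lockName, lockCookie, lockDesc, lockDuration, &flags)
		if err != nil {
			return fmt.Errorf("failed to lock volume ID %s: %w", volID, err)
		}
		if ret == 0 {
			return nil // lock acquired
		}
		// The lock is held by another client/cookie pair; wait and retry.
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for lock on volume ID %s: %w", volID, ctx.Err())
		case <-ticker.C:
		}
	}
}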

Contributor Author

Thank you @NymanRobin for testing.
I also tested, and it is slow.
It took 8 minutes for 6 replicas to become ready.
We should definitely improve it.

Contributor

I can write up a ticket for the performance and for the permissions!

Labels
component/cephfs Issues related to CephFS
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Concurrency issue when pvc is mounted to multiple pods
7 participants